Doing Better Than UCT: Rational Monte Carlo Sampling in Trees
Authors
Abstract
UCT, a state-of-the-art algorithm for Monte Carlo tree sampling (MCTS), is based on UCB, a sampling policy for the Multi-armed Bandit problem (MAB) that minimizes the cumulative regret. However, MCTS differs from MAB in that only the final choice, rather than every arm pull, brings a reward; that is, the simple regret, as opposed to the cumulative regret, must be minimized. This ongoing work aims at applying meta-reasoning techniques to MCTS, which is non-trivial. We begin by introducing policies for multi-armed bandits with lower simple regret than UCB, and an algorithm for MCTS which combines cumulative and simple regret minimization and outperforms UCT. We also develop a sampling scheme loosely based on a myopic version of perfect value of information. Finite-time and asymptotic analyses of the policies are provided, and the algorithms are compared empirically.
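To make the regret distinction concrete, here is a minimal Python sketch; it is not the paper's algorithm, and the function names, the ε-greedy rule, and the parameter values are illustrative assumptions. It contrasts UCB1, which minimizes cumulative regret over all pulls, with a simple-regret-oriented policy that only cares about the quality of the final recommendation.

```python
import math
import random

def ucb1(counts, means, c=math.sqrt(2)):
    """Cumulative-regret rule (UCB1): maximize mean + c*sqrt(ln n / n_i)."""
    n = sum(counts)
    return max(range(len(counts)),
               key=lambda i: math.inf if counts[i] == 0
               else means[i] + c * math.sqrt(math.log(n) / counts[i]))

def eps_greedy(counts, means, eps=0.5):
    """A simple-regret-oriented rule: explore uniformly with probability eps,
    otherwise pull the empirically best arm."""
    if random.random() < eps or 0 in counts:
        return random.randrange(len(counts))
    return max(range(len(counts)), key=lambda i: means[i])

def recommend(pull, n_arms, budget, policy):
    """Spend the sampling budget under `policy`, then recommend the arm with
    the highest empirical mean; only this final choice incurs simple regret."""
    counts, means = [0] * n_arms, [0.0] * n_arms
    for _ in range(budget):
        i = policy(counts, means)
        r = pull(i)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]
    return max(range(n_arms), key=lambda i: means[i])

if __name__ == "__main__":
    arms = [0.4, 0.5, 0.6]  # toy Bernoulli arms; arm 2 is the best final choice
    pull = lambda i: 1.0 if random.random() < arms[i] else 0.0
    print(recommend(pull, len(arms), 2000, ucb1))        # UCB sampling
    print(recommend(pull, len(arms), 2000, eps_greedy))  # simple-regret sampling
```

One natural way to combine the two in MCTS, in the spirit of the abstract, is to sample the root with a simple-regret rule (since only the final move selection is rewarded) and interior nodes with UCB; the scheme the paper actually proposes is given in the full text.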
Similar resources
Bandit Based Monte-Carlo Planning
For large state-space Markovian Decision Problems, Monte-Carlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling...
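As a rough illustration of how bandit-style selection can guide Monte-Carlo planning in a finite-horizon MDP, the sketch below applies a UCB1 rule at every visited (depth, state) pair. It assumes a generative model `step(state, action) -> (next_state, reward)` with hashable states (an illustrative interface, not the paper's), and it simplifies real UCT, which expands one tree node per simulation and falls back to a random default policy below the tree.

```python
import math
from collections import defaultdict

class ToyUCT:
    """Simplified UCT-style planner for a finite-horizon MDP with a generative
    model `step(state, action) -> (next_state, reward)` (assumed interface)."""

    def __init__(self, actions, step, horizon, c=math.sqrt(2)):
        self.actions, self.step, self.horizon, self.c = actions, step, horizon, c
        self.n = defaultdict(int)     # visit counts, keyed by (depth, state, action)
        self.q = defaultdict(float)   # mean return,  keyed by (depth, state, action)

    def simulate(self, state, depth):
        """One rollout from `state`; a UCB1 rule picks the action at each level."""
        if depth == self.horizon:
            return 0.0
        total = sum(self.n[(depth, state, a)] for a in self.actions)
        def ucb(a):
            k = (depth, state, a)
            if self.n[k] == 0:
                return math.inf                       # try untried actions first
            return self.q[k] + self.c * math.sqrt(math.log(total) / self.n[k])
        a = max(self.actions, key=ucb)
        nxt, reward = self.step(state, a)
        ret = reward + self.simulate(nxt, depth + 1)  # accumulate return recursively
        k = (depth, state, a)
        self.n[k] += 1
        self.q[k] += (ret - self.q[k]) / self.n[k]    # incremental mean update
        return ret

    def plan(self, state, budget=1000):
        """Run `budget` simulations, then return the action with the best mean."""
        for _ in range(budget):
            self.simulate(state, 0)
        return max(self.actions, key=lambda a: self.q[(0, state, a)])
```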
On Adversarial Search Spaces and Sampling-Based Planning
Upper Confidence bounds applied to Trees (UCT), a bandit-based Monte-Carlo sampling algorithm for planning, has recently been the subject of great interest in adversarial reasoning. UCT has been shown to outperform traditional minimax-based approaches in several challenging domains such as Go and Kriegspiel, although minimax search still prevails in other domains such as Chess. This work provides...
Smooth UCT Search in Computer Poker
Self-play Monte Carlo Tree Search (MCTS) has been successful in many perfect-information two-player games. Although these methods have been extended to imperfect-information games, so far they have not achieved the same level of practical success or theoretical convergence guarantees as competing methods. In this paper we introduce Smooth UCT, a variant of the established Upper Confidence Bounds...
Monte-Carlo Planning for Pathfinding in Real-Time Strategy Games
In this work, we explore two Monte-Carlo planning approaches: Upper Confidence Tree (UCT) and Rapidly-exploring Random Tree (RRT). These Monte-Carlo planning approaches are applied in a real-time strategy game for solving the pathfinding problem. The planners are evaluated using a grid-based representation of our game world. The results show that the UCT planner solves the path planning problem...
State Aggregation in Monte Carlo Tree Search
Monte Carlo tree search (MCTS) algorithms are a popular approach to online decision-making in Markov decision processes (MDPs). These algorithms can, however, perform poorly in MDPs with high stochastic branching factors. In this paper, we study state aggregation as a way of reducing stochastic branching in tree search. Prior work has studied formal properties of MDP state aggregation in the co...
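As a rough sketch of the aggregation idea (not the algorithm studied in the paper), the snippet below groups successor states sampled at a chance node by a user-supplied abstraction key, so the search tree branches on a few abstract outcomes instead of every distinct raw state. The function name `aggregate`, its signature, and the returned statistics are illustrative assumptions.

```python
from collections import defaultdict

def aggregate_successors(samples, aggregate):
    """Collapse sampled successor states into abstract chance branches.

    `samples` is a list of (next_state, reward) pairs drawn from a generative
    model at a chance node; `aggregate` maps a raw state to its abstraction key.
    Returns {key: (empirical probability, mean reward)} so each abstract key
    can be treated as a single branch with pooled statistics."""
    if not samples:
        return {}
    groups = defaultdict(list)
    for state, reward in samples:
        groups[aggregate(state)].append(reward)
    n = len(samples)
    return {key: (len(rewards) / n, sum(rewards) / len(rewards))
            for key, rewards in groups.items()}
```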
Journal title: CoRR
Volume: abs/1108.3711
Publication date: 2011